
    Flow-based intrusion detection

    The spread of 1-10 Gbps technology has in recent years paved the way to a flourishing landscape of new, high-bandwidth Internet services. As users, we depend on the Internet in our daily life for simple tasks such as checking e-mails, but also for managing private and financial information. Entrusting such information to the Internet, however, also means that the network has become an alluring place for hackers. The research community has answered this threat with an increased interest in intrusion detection, focusing on new techniques to detect intruders in a timely manner and prevent damage. Our studies in the field of intrusion detection, however, made us realize that additional research is needed, in particular: the creation of shared data sets to validate Intrusion Detection Systems (IDSs) and the development of automatic procedures to tune the parameters of IDSs. The contribution of this thesis is a structured approach to intrusion detection that focuses on (i) shared ground-truth data sets and (ii) automatic parameter tuning. We develop our approach by focusing on network flows, which offer an aggregated view of network traffic in terms of the number of packets and bytes exchanged over the network, and on flow-based time series, which describe how the number of flows, packets and bytes changes over time. We approach the problem of shared ground truth first by manually creating it and, second, by proposing network models to generate artificial ground truth. Finally, the performance of an IDS is governed by the trade-off between detecting all anomalies (at the expense of raising alarms too often) and missing anomalies (but not issuing many false alarms). We approach the problem of automatically tuning IDSs by developing a procedure that aims to mathematically optimize the system performance in a systematic manner.
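
    As a rough illustration of the flow-based time series the thesis builds on, the sketch below bins hypothetical flow records (start time, packets, bytes) into fixed intervals to obtain per-interval flow, packet, and byte counts; the record format and bin size are assumptions, not the thesis' implementation.

```python
# Minimal sketch (not the thesis' code): build a flow-based time series by
# binning flow records into fixed intervals, yielding per-interval counts of
# flows, packets, and bytes. Flow records are hypothetical (start, pkts, bytes).
from collections import defaultdict

def flow_time_series(flow_records, bin_seconds=60):
    """flow_records: iterable of (start_time, packets, bytes) tuples."""
    bins = defaultdict(lambda: {"flows": 0, "packets": 0, "bytes": 0})
    for start_time, packets, nbytes in flow_records:
        b = int(start_time // bin_seconds)
        bins[b]["flows"] += 1
        bins[b]["packets"] += packets
        bins[b]["bytes"] += nbytes
    return [bins[b] for b in sorted(bins)]

# Example: three flows falling into two one-minute bins.
records = [(0.0, 10, 5200), (30.0, 4, 600), (75.0, 120, 90000)]
print(flow_time_series(records, bin_seconds=60))
```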

    Measuring exposure in DDoS protection services

    Denial-of-Service attacks have rapidly gained in popularity over the last decade. The increase in frequency, size, and complexity of attacks has made DDoS Protection Services (DPS) an attractive mitigation solution to which the protection of services can be outsourced. Despite a thriving market and increasing adoption of protection services, a DPS can often be bypassed, and direct attacks can be launched against the origin of a target. Many protection services leverage the Domain Name System (DNS) to protect, e.g., Web sites. When the DNS is misconfigured, the origin IP address of a target can leak to attackers, which defeats the purpose of outsourcing protection. We perform a large-scale analysis of this phenomenon using three large data sets that cover a 16-month period: a data set of active DNS measurements; a DNS-based data set that focuses on DPS adoption; and a data set of DoS attacks inferred from backscatter traffic to a sizable darknet. We analyze nearly 11k Web sites in Alexa's top 1M that outsource protection to eight leading DPS providers. Our results show that 40% of these Web sites expose the origin in the DNS. Moreover, we show that the origin of 19% of these Web sites is targeted after outsourcing protection.
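
    The DNS-exposure check can be illustrated with a minimal sketch: a protected domain whose resolved addresses fall outside the DPS provider's address ranges may expose its origin. The prefixes and the hostname below are placeholders, and this is not the paper's measurement pipeline.

```python
# Illustrative sketch (not the paper's methodology): flag a possible origin
# exposure when a DPS-protected hostname resolves to an address outside the
# provider's announced prefixes. The prefixes below are placeholders.
import ipaddress
import socket

DPS_PREFIXES = [ipaddress.ip_network(p) for p in ("192.0.2.0/24", "198.51.100.0/24")]

def possibly_exposed(hostname):
    """Return resolved addresses that fall outside the known DPS prefixes."""
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)
    except socket.gaierror:
        return []
    return [a for a in addresses
            if not any(ipaddress.ip_address(a) in net for net in DPS_PREFIXES)]

leaks = possibly_exposed("www.example.com")
if leaks:
    print("candidate origin addresses outside the DPS:", leaks)
```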

    Mitigating DDoS attacks using OpenFlow-based software defined networking

    Over the last years, Distributed Denial-of-Service (DDoS) attacks have become an increasing threat on the Internet, with recent attacks reaching traffic volumes of up to 500 Gbps. To make matters worse, web-based facilities that offer “DDoS-as-a-service” (i.e., Booters) allow even laymen to launch attacks in the order of tens of Gbps in exchange for only a few euros. A recent development in networking is the principle of Software Defined Networking (SDN), and related technologies such as OpenFlow. In SDN, the control plane and data plane of the network are decoupled. This has several advantages, such as centralized control over forwarding decisions, dynamic updating of forwarding rules, and easier and more flexible network configuration. Given these advantages, we expect SDN to be well-suited for DDoS attack mitigation. Typical mitigation solutions, however, are not built using SDN. In this paper, we propose to design and develop an OpenFlow-based mitigation architecture for DDoS attacks. The research involves looking at the applicability of OpenFlow, as well as studying existing solutions built on other technologies. The research is still in its beginning phase and will contribute towards a Ph.D. thesis after four years.
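
    As a sketch of how OpenFlow rules could block attack traffic, the example below uses the Ryu controller framework to install a drop rule for a hypothetical attacker prefix when a switch connects; it is not the architecture proposed in the paper, and the prefix and priority are placeholders.

```python
# Minimal sketch of OpenFlow-based blocking with the Ryu controller framework:
# when a switch connects, install a high-priority rule that drops IPv4 traffic
# from a (hypothetical) attacker source prefix. Real mitigation would add and
# remove such rules dynamically, driven by detection results.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

ATTACKER_PREFIX = ("203.0.113.0", "255.255.255.0")  # placeholder source range

class DropAttacker(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        parser = datapath.ofproto_parser
        # Match IPv4 packets from the attacker prefix; an empty instruction
        # list means matching packets are dropped (OpenFlow 1.3).
        match = parser.OFPMatch(eth_type=0x0800, ipv4_src=ATTACKER_PREFIX)
        mod = parser.OFPFlowMod(datapath=datapath, priority=100,
                                match=match, instructions=[])
        datapath.send_msg(mod)
```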

    Evaluating Third-Party Bad Neighborhood Blacklists for Spam Detection

    The distribution of malicious hosts over the IP address space is far from uniform. In fact, malicious hosts tend to be concentrated in certain portions of the IP address space, forming so-called Bad Neighborhoods. This phenomenon has previously been exploited to filter Spam by means of Bad Neighborhood blacklists. In this paper, we evaluate how much a network administrator can rely upon different Bad Neighborhood blacklists generated by third-party sources to fight Spam. One could expect that Bad Neighborhood blacklists generated from different sources contain, to a varying degree, disjoint sets of entries. Therefore, we investigate (i) how specific a blacklist is to its source, and (ii) whether different blacklists can be used interchangeably to protect a target from Spam. We analyze five Bad Neighborhood blacklists generated from real-world measurements and study their effectiveness in protecting three production mail servers from Spam. Our findings lead to several operational considerations on how a network administrator could best benefit from Bad Neighborhood-based Spam filtering.
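
    The Bad Neighborhood idea can be sketched as aggregating observed spam source addresses into /24 prefixes and blacklisting the prefixes with enough offenders; the threshold and data below are illustrative, not the paper's algorithm.

```python
# Simplified sketch of a Bad Neighborhood blacklist (not the paper's exact
# algorithm): aggregate observed spam source addresses into /24 prefixes,
# keep prefixes with enough offenders, and filter senders by prefix membership.
import ipaddress
from collections import Counter

def build_blacklist(spam_sources, min_hosts=3):
    """spam_sources: iterable of IPv4 addresses observed sending spam."""
    counts = Counter(ipaddress.ip_network(f"{ip}/24", strict=False)
                     for ip in spam_sources)
    return {net for net, n in counts.items() if n >= min_hosts}

def is_bad_neighborhood(ip, blacklist):
    return ipaddress.ip_network(f"{ip}/24", strict=False) in blacklist

spam = ["198.51.100.7", "198.51.100.42", "198.51.100.99", "203.0.113.5"]
blacklist = build_blacklist(spam)
print(is_bad_neighborhood("198.51.100.200", blacklist))  # True: same /24
```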

    Characterizing and mitigating the DDoS-as-a-Service phenomenon

    Distributed Denial of Service (DDoS) attacks are an increasing threat on the Internet. Until a few years ago, these types of attacks were only launched by people with advanced knowledge of computer networks. Nowadays, however, the ability to launch attacks is offered as a service to everyone, even to those without any advanced knowledge. Booters are online tools that offer DDoS-as-a-Service. Some of them advertise, for less than US$ 5, up to 25 Gbps of DDoS traffic, which is more than enough to make most hosts and services on the Internet unavailable. Booters are increasing in popularity and have been used successfully in attacks against third-party services, such as government websites; yet there are few mitigation proposals. In addition, existing literature in this area provides only a partial understanding of the threat, for example by analyzing only a few aspects of one specific Booter. In this paper, we propose mitigation solutions against DDoS-as-a-Service, to be achieved after an extensive characterization of Booters. Early results show 59 different Booters, some of which do not deliver what they offer. This research is still in its initial phase and will contribute to a Ph.D. thesis after four years.

    Inside Dropbox: Understanding Personal Cloud Storage Services

    Personal cloud storage services are gaining popularity. With a rush of providers to enter the market and an increasing offer of cheap storage space, it is to be expected that cloud storage will soon generate a high amount of Internet traffic. Very little is known about the architecture and the performance of such systems, and the workload they have to face. This understanding is essential for designing efficient cloud storage systems and predicting their impact on the network. This paper presents a characterization of Dropbox, the leading solution in personal cloud storage in our datasets. By means of passive measurements, we analyze data from four vantage points in Europe, collected during 42 consecutive days. Our contributions are threefold: Firstly, we are the first to study Dropbox, which we show to be the most widely-used cloud storage system, already accounting for a volume equivalent to around one third of the YouTube traffic at campus networks on some days. Secondly, we characterize the workload that typical users in different environments generate to the system, highlighting how this reflects on network traffic. Lastly, our results show possible performance bottlenecks caused by both the current system architecture and the storage protocol. This is exacerbated for users connected far from the control and storage data-centers.
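
    A toy sketch of the kind of per-service volume comparison reported above, assuming passively collected flow records that are already classified per service; the field names and figures are made up and this is not the paper's methodology.

```python
# Illustrative sketch (not the paper's pipeline): from already-classified flow
# records, compute the per-day ratio of a cloud storage service's traffic
# volume to YouTube's volume. Record fields are assumptions.
from collections import defaultdict

def daily_volume_ratio(flows, service="dropbox", baseline="youtube"):
    """flows: iterable of dicts with 'day', 'service', and 'bytes' keys."""
    volume = defaultdict(lambda: defaultdict(int))
    for f in flows:
        volume[f["day"]][f["service"]] += f["bytes"]
    return {day: v[service] / v[baseline]
            for day, v in volume.items() if v.get(baseline)}

flows = [{"day": "2012-05-01", "service": "dropbox", "bytes": 3_000_000},
         {"day": "2012-05-01", "service": "youtube", "bytes": 9_000_000}]
print(daily_volume_ratio(flows))  # {'2012-05-01': 0.333...}
```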

    Anomaly Characterization in Flow-Based Traffic Time Series

    The increasing number of network attacks causes growing problems for network operators and users. Not only do these attacks pose direct security threats to our infrastructure, but they may also lead to service degradation, due to the massive traffic volume variations that are possible during such attacks. The recent spread of Gbps network technology has made the problem of detecting these attacks harder, since existing packet-based monitoring and intrusion detection systems do not scale well to Gigabit speeds. Therefore, the attention of the scientific community is shifting towards the possible use of aggregated traffic metrics. The goal of this paper is to investigate how malicious traffic can be characterized on the basis of such aggregated metrics, in particular by using flow, packet and byte frequency variations over time. The contribution of this paper is that it shows, based on a number of real case studies on high-speed networks, that all three metrics may be necessary for proper time series anomaly characterization.
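
    A minimal sketch of per-metric anomaly flagging over such a time series: each of the flow, packet, and byte series is checked for large deviations from its mean, mirroring the observation that one metric alone can miss what another exposes. The z-score rule and threshold are assumptions, not the paper's method.

```python
# Sketch (assumptions, not the paper's detector): flag time bins whose flow,
# packet, or byte counts deviate strongly from the series mean.
from statistics import mean, pstdev

def anomalous_bins(series, threshold=3.0):
    """series: list of dicts with 'flows', 'packets', 'bytes' per time bin."""
    flagged = set()
    for metric in ("flows", "packets", "bytes"):
        values = [bin_[metric] for bin_ in series]
        mu, sigma = mean(values), pstdev(values)
        if sigma == 0:
            continue
        flagged.update(i for i, v in enumerate(values)
                       if abs(v - mu) / sigma > threshold)
    return sorted(flagged)

# Example: a byte-only spike that flow and packet counts alone would miss.
series = [{"flows": 10, "packets": 100, "bytes": 80_000} for _ in range(20)]
series[7]["bytes"] = 900_000
print(anomalous_bins(series))  # [7]
```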

    Autonomic Parameter Tuning of Anomaly-Based IDSs: an SSH Case Study

    Anomaly-based intrusion detection systems classify network traffic instances by comparing them with a model of the normal network behavior. To be effective, such systems are expected to precisely detect intrusions (high true positive rate) while limiting the number of false alarms (low false positive rate). However, there exists a natural trade-off between detecting all anomalies (at the expense of raising alarms too often), and missing anomalies (but not issuing any false alarms). The parameters of a detection system play a central role in this trade-off, since they determine how responsive the system is to an intrusion attempt. Despite the importance of properly tuning the system parameters, the literature has put little emphasis on the topic, and the task of adjusting such parameters is usually left to the expertise of the system manager or expert IT personnel. In this paper, we present an autonomic approach for tuning the parameters of anomaly-based intrusion detection systems in the case of SSH traffic. We propose a procedure that aims to automatically tune the system parameters and, by doing so, to optimize the system performance. We validate our approach by testing it on a flow-based probabilistic detection system for the detection of SSH attacks.
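
    The tuning idea can be sketched in general terms as a search over a detection threshold on labeled traffic, keeping the value that best balances true and false positives; the utility function below is illustrative and not the paper's procedure, which is specific to its probabilistic SSH detector.

```python
# Sketch of threshold tuning (illustrative, not the paper's procedure): sweep
# candidate thresholds on labeled traffic and keep the one with the best
# trade-off between true positives, false positives, and misses.
def tune_threshold(scores, labels, candidates, fp_weight=1.0):
    """scores: anomaly scores; labels: 1 for attack, 0 for benign."""
    best = None
    for t in candidates:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        utility = tp - fp_weight * fp - fn
        if best is None or utility > best[1]:
            best = (t, utility)
    return best[0]

# Example: choose among three thresholds on a tiny labeled set.
print(tune_threshold([0.2, 0.8, 0.9, 0.4], [0, 1, 1, 0], [0.3, 0.5, 0.85]))  # 0.5
```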

    DDoS Mitigation: A Measurement-Based Approach

    Society heavily relies upon the Internet for global communications. Simultaneously, Internet stability and reliability are continuously subject to deliberate threats. These threats include (Distributed) Denial-of-Service (DDoS) attacks, which can potentially be devastating. As a result of DDoS, businesses lose hundreds of millions of dollars annually. Moreover, when it comes to vital infrastructure, national safety and even lives could be at stake. Effective defenses are therefore an absolute necessity. Prospective users of readily available mitigation solutions can choose from many shapes and sizes, although the right fit may not always be apparent. In addition, the deployment and operation of mitigation solutions may come with hidden hazards that need to be better understood. Policy makers and governments also face questions concerning what needs to be done to promote cybersafety on a national level. Developing an optimal course of action to deal with DDoS therefore also brings about societal challenges. Even though the DDoS problem is by no means new, the scale of the problem is still unclear. We do not know exactly what it is we are defending against, and getting a better understanding of attacks is essential to addressing the problem head-on. To advance situational awareness, many technical and societal challenges still need to be tackled. Given the central importance of better understanding the DDoS problem to improve overall Internet security, the thesis that we summarize in this paper has three main contributions. First, we rigorously characterize attacks and attacked targets at scale. Second, we advance knowledge about the Internet-wide adoption, deployment and operational use of various mitigation solutions. Finally, we investigate hidden hazards that can render mitigation solutions altogether ineffective.